In the process of materials discovery, chemists currently need to perform many laborious, time-consuming, and often dangerous lab experiments. To accelerate this process, we propose a framework for robots to assist chemists by performing lab experiments autonomously. The solution allows a general-purpose robot to perform diverse chemistry experiments and efficiently make use of available lab tools. Our system can load high-level descriptions of chemistry experiments, perceive a dynamic workspace, and autonomously plan the required actions and motions to perform the given chemistry experiments with common tools found in the existing lab environment. Our architecture uses a modified PDDLStream solver for integrated task and constrained motion planning, which generates plans and motions that are guaranteed to be safe by preventing collisions and spillage. We present a modular framework that can scale to many different experiments, actions, and lab tools. In this work, we demonstrate the utility of our framework on three pouring skills and two foundational chemical experiments for materials synthesis: solubility and recrystallization. More experiments and updated evaluations can be found at https://ac-rad.github.io/arc-icra2023.
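As a rough illustration of the kind of high-level experiment description and safety precondition the abstract alludes to, the following Python sketch encodes pouring steps and a spillage check. The container names, fields, and thresholds are hypothetical; the real system relies on a modified PDDLStream solver and constrained motion planning, none of which is reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Container:
    name: str
    capacity_ml: float
    volume_ml: float

# A high-level description of a (hypothetical) solubility experiment
# as an ordered list of symbolic steps for the planner to expand.
experiment = [
    {"action": "pour", "source": "water_bottle", "target": "beaker", "amount_ml": 50},
    {"action": "pour", "source": "salt_vial",    "target": "beaker", "amount_ml": 5},
    {"action": "stir", "target": "beaker", "duration_s": 30},
]

def pour_is_safe(source: Container, target: Container, amount_ml: float) -> bool:
    """One example of a spillage constraint: the pour must not exceed the
    source's contents or overfill the target."""
    return (amount_ml <= source.volume_ml
            and target.volume_ml + amount_ml <= target.capacity_ml)

beaker = Container("beaker", capacity_ml=250, volume_ml=0)
bottle = Container("water_bottle", capacity_ml=500, volume_ml=400)
assert pour_is_safe(bottle, beaker, 50)
```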
Diagnostic radiologists need artificial intelligence (AI) for medical imaging, but access to the medical images required for training AI has become increasingly restricted. To release and use medical images, we need an algorithm that can simultaneously protect privacy and preserve pathologies in the images. To develop such an algorithm, we propose DP-GLOW, a hybrid of a local differential privacy (LDP) algorithm and a flow-based deep generative model (GLOW). By applying a GLOW model, we disentangle the pixelwise correlation of images, which otherwise makes it difficult to protect privacy with straightforward LDP algorithms for images. Specifically, we map images onto the latent vector of the GLOW model, each element of which follows an independent normal distribution, and apply the Laplace mechanism to the latent vector. Finally, we apply DP-GLOW to chest X-ray images to generate LDP images while preserving pathologies.
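The latent-space Laplace mechanism described above can be sketched in a few lines of Python. A toy orthogonal transform stands in for a trained GLOW model, and the image size, epsilon, and sensitivity are placeholder values, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for GLOW: a random orthogonal matrix gives an invertible map
# from (flattened) image space to a latent space.
d = 16 * 16
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

def encode(image_flat):
    return Q @ image_flat                  # image -> latent (forward pass)

def decode(latent):
    return Q.T @ latent                    # latent -> image (inverse pass)

def laplace_mechanism(latent, epsilon, sensitivity):
    """Add element-wise Laplace noise to the latent vector."""
    scale = sensitivity / epsilon
    return latent + rng.laplace(loc=0.0, scale=scale, size=latent.shape)

image = rng.random(d)                      # placeholder for a chest X-ray
z = encode(image)
z_private = laplace_mechanism(z, epsilon=10.0, sensitivity=1.0)
private_image = decode(z_private)          # released instead of the original
```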
In recent years, various service robots have been introduced in stores as recommendation systems. Previous studies attempted to increase the influence of these robots by improving their social acceptance and trust. However, when such a service robot recommends a product to customers in a real environment, its effect on the customers is shaped not only by the robot itself but also by the social influence of surrounding people, such as store clerks. Leveraging the social influence of the clerks may therefore increase the influence of the robots on the customers. Hence, we compared the influence of robots with and without collaborative customer service between the robots and clerks in two bakery stores. The experimental results showed that collaborative customer service increased the purchase rate of the recommended bread and improved customers' impressions of the robot and of the store experience. Because the results also showed that the workload required for the clerks to collaborate with the robot was not high, this study suggests that stores with service robots can benefit from introducing collaborative customer service.
In this paper, we report a field study in which two service robots were deployed in a bakery as part of a sales promotion. Previous studies have explored public-space applications such as shopping malls; however, more evidence is needed that service robots can contribute to sales in real stores. Moreover, the behaviors of customers and service robots in the context of sales promotion have not been examined closely, so it remains unclear which types of robot behavior are effective and how customers react to such robots. To address these issues, we installed two remotely operated service robots in a bakery for nearly two weeks, one at the entrance serving as a greeter and the other recommending products inside the store. The results show that sales increased sharply while the robots were deployed. In addition, we annotated video recordings of robot and customer behavior. We found that although the robot placed at the entrance successfully attracted the interest of passers-by, no clear increase in the number of customers entering the store was observed. However, we confirmed that the recommendations made by the robot operating inside the store did have a positive impact. We discuss our findings in detail and provide theoretical and practical recommendations for future research and applications.
Time is an important aspect of documents and is used in a range of NLP and IR tasks. In this work, we investigate methods of incorporating temporal information during pre-training to further improve performance on time-related tasks. Whereas BERT uses synchronic document collections (BooksCorpus and English Wikipedia) as its training corpora, we build word representations from a temporal collection of news articles spanning a long period. We introduce TimeBERT, a novel language representation model trained on a temporal collection of news articles via two new pre-training tasks that exploit two different temporal signals to construct time-aware language representations. Experimental results show that TimeBERT consistently outperforms BERT and other existing pre-trained models on a range of downstream NLP tasks and applications for which time is of importance.
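The abstract does not spell out the two pre-training tasks, so the sketch below only illustrates one plausible way to pair masked language modeling with a temporal signal (a coarse publication-date label). The example format, mask probability, and date bucketing are all hypothetical and are not claimed to match TimeBERT's actual tasks.

```python
import random
import re
from dataclasses import dataclass

@dataclass
class PretrainingExample:
    masked_text: str        # input for masked language modeling
    mlm_targets: list       # the tokens that were masked out
    date_bucket: int        # coarse publication-time label (here: a year index)

def build_example(text: str, pub_year: int, first_year: int = 1987,
                  mask_prob: float = 0.15, mask_token: str = "[MASK]"):
    """Turn one dated news article into a pretraining example."""
    tokens = re.findall(r"\w+|[^\w\s]", text)
    masked, targets = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            masked.append(mask_token)
            targets.append(tok)
        else:
            masked.append(tok)
    return PretrainingExample(" ".join(masked), targets, pub_year - first_year)

example = build_example("The treaty was signed last week in Geneva.", pub_year=1993)
print(example.date_bucket, example.masked_text)
```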
Differentially private federated learning (DP-FL) has received increasing attention as a way to mitigate the privacy risk in federated learning. Although different schemes for DP-FL have been proposed, a utility gap remains. Employing central differential privacy in FL (CDP-FL) can provide a good balance between privacy and model utility but requires a trusted server. Using local differential privacy for FL (LDP-FL) does not require a trusted server but suffers from a poor privacy-utility trade-off. Recently proposed shuffle-DP-based FL has the potential to bridge the gap between CDP-FL and LDP-FL without a trusted server; however, a utility gap remains when the number of model parameters is large. In this work, we propose OLIVE, a system that combines the merits of CDP-FL and LDP-FL by leveraging a Trusted Execution Environment (TEE). Our main technical contributions are the analysis of, and countermeasures against, the vulnerability of the TEE in OLIVE. First, we theoretically analyze the memory access pattern leakage of OLIVE and find that there is a risk for sparsified gradients, which are common in FL. Second, we design an inference attack to understand how the memory access pattern could be linked to the training data. Third, we propose oblivious yet efficient algorithms to prevent memory access pattern leakage in OLIVE. Our experiments on real-world data demonstrate that OLIVE is efficient even when training a model with hundreds of thousands of parameters and is effective against side-channel attacks on the TEE.
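To illustrate why sparsified gradients can leak through memory access patterns inside a TEE, the sketch below contrasts an index-dependent aggregation with a data-independent linear scan. This is only a conceptual stand-in: OLIVE's actual oblivious algorithms and the cost model of real enclave memory are not reproduced here.

```python
import numpy as np

def aggregate_leaky(sparse_updates, dim):
    """Sum sparse client updates by touching only their nonzero coordinates;
    the memory access pattern reveals which parameters each client updated."""
    total = np.zeros(dim)
    for indices, values in sparse_updates:
        for i, v in zip(indices, values):
            total[i] += v                       # data-dependent access
    return total

def aggregate_oblivious(sparse_updates, dim):
    """Scan every coordinate for every client so the access sequence does not
    depend on which indices are nonzero (a linear-scan stand-in for the
    oblivious building blocks a real enclave implementation would use)."""
    total = np.zeros(dim)
    for indices, values in sparse_updates:
        idx = np.asarray(indices)
        val = np.asarray(values, dtype=float)
        for j in range(dim):
            total[j] += float(np.sum(np.where(idx == j, val, 0.0)))
    return total

updates = [([2, 7], [0.5, -1.0]), ([7, 9], [0.25, 0.75])]
assert np.allclose(aggregate_leaky(updates, 10), aggregate_oblivious(updates, 10))
```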
Although deep models achieve high predictive performance, it is difficult for humans to understand the predictions they make. Explainability is important for real-world applications to justify their reliability. Many example-based explanation methods have been proposed, such as representer point selection, in which an explanation model defined by a set of training examples is used to explain a prediction model. To improve interpretability, it is important to reduce the number of examples in the explanation model. However, explanations with fewer examples can be unfaithful, since it is difficult for an explanation model based on such a small set of examples to approximate the prediction model well. An unfaithful explanation means that the predictions of the explanation model differ from those of the prediction model. We propose a method for training deep models such that their predictions are faithfully explained by explanation models with a small number of examples. We train the prediction and explanation models simultaneously with a sparse regularizer to reduce the number of examples. The proposed method can be incorporated into any neural-network-based prediction model. Experiments on multiple datasets demonstrate that the proposed method improves faithfulness while maintaining predictive performance.
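A rough sketch of the kind of joint objective described above, not the paper's exact formulation: a predictor and an example-weighted explanation model are trained together, with an L1 penalty pushing most per-example weights toward zero. The kernel, loss weights, and toy data are placeholders.

```python
import torch

torch.manual_seed(0)
n_train, d = 100, 10
X = torch.randn(n_train, d)
y = torch.randn(n_train, 1)

predictor = torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.ReLU(),
                                torch.nn.Linear(32, 1))
alpha = torch.zeros(n_train, 1, requires_grad=True)     # per-example explanation weights

opt = torch.optim.Adam(list(predictor.parameters()) + [alpha], lr=1e-2)
lam_fid, lam_sparse = 1.0, 0.1                           # placeholder loss weights

for step in range(200):
    f = predictor(X)                  # prediction-model outputs
    K = X @ X.T                       # similarity between training examples (simple kernel)
    g = K @ alpha                     # explanation model: weighted sum over training examples
    loss = (torch.nn.functional.mse_loss(f, y)              # predictive loss
            + lam_fid * torch.nn.functional.mse_loss(g, f)  # faithfulness to the predictor
            + lam_sparse * alpha.abs().sum())               # sparsity: few explaining examples
    opt.zero_grad()
    loss.backward()
    opt.step()

n_used = int((alpha.abs() > 1e-3).sum())  # examples that effectively take part in explanations
```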
Federated learning (FL) is an emerging machine learning paradigm in which data owners can collaboratively train models without sharing their raw data. Two fundamental research problems in FL are incentive mechanisms and privacy protection. The former focuses on how to incentivize data owners to participate in FL; the latter studies how to protect data owners' privacy while maintaining high utility of the trained models. However, incentive mechanisms and privacy protection in FL have been studied separately, and no existing work addresses both problems at the same time. In this work, we address both problems with FL-Market, which incentivizes data owners' participation by providing appropriate payments and privacy protection. FL-Market enables data owners to be compensated according to their privacy loss quantified by local differential privacy (LDP). Our insight is that, by meeting data owners' personalized privacy preferences and providing appropriate payments, we can (1) incentivize privacy-risk-tolerant data owners to set larger privacy parameters (i.e., gradients with less noise) and (2) provide preferred privacy protection for privacy-risk-averse data owners. To achieve this, we design an LDP-based FL framework with a deep-learning-empowered auction mechanism for incentivizing the trading of less-noisy local gradients and an optimal aggregation mechanism for aggregating the local gradients into an accurate global gradient. Our experiments verify the effectiveness of the proposed framework and mechanisms.
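The LDP step and noise-aware aggregation described above can be sketched as follows; the clipping bound, weighting scheme, and privacy parameters are illustrative only and do not reproduce FL-Market's auction or optimal aggregation mechanisms.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_gradient(grad, epsilon, clip=1.0):
    """Laplace mechanism on an L1-clipped local gradient (epsilon-LDP)."""
    g = grad * min(1.0, clip / (np.abs(grad).sum() + 1e-12))   # bound L1 sensitivity
    return g + rng.laplace(scale=clip / epsilon, size=g.shape)

def aggregate(noisy_grads, epsilons):
    """Weight each owner's gradient by the inverse of its noise variance,
    so less-noisy (larger-epsilon) gradients contribute more."""
    weights = np.array([eps ** 2 for eps in epsilons], dtype=float)
    weights /= weights.sum()
    return sum(w * g for w, g in zip(weights, noisy_grads))

epsilons = [0.5, 1.0, 4.0]                        # personalized privacy parameters
local_grads = [rng.standard_normal(8) for _ in epsilons]
noisy = [perturb_gradient(g, eps) for g, eps in zip(local_grads, epsilons)]
global_grad = aggregate(noisy, epsilons)
```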